
    Calculation of arterial wall temperature in atherosclerotic arteries: effect of pulsatile flow, arterial geometry, and plaque structure

    BACKGROUND: This paper presents calculations of the temperature distribution in an atherosclerotic plaque experiencing an inflammatory process; it analyzes the presence of hot spots in the plaque region and their relationship to blood flow, arterial geometry, and inflammatory cell distribution. Determination of the plaque temperature has become an important topic because plaques showing a temperature inhomogeneity have a higher likelihood of rupture. As a result, monitoring plaque temperature and knowing the factors affecting it can help in the prevention of sudden rupture. METHODS: The transient temperature profile in inflamed atherosclerotic plaques is calculated by solving an energy equation and the Navier-Stokes equations in 2D idealized arterial models of a bending artery and an arterial bifurcation. For obtaining the numerical solution, the commercial package COMSOL 3.2 was used. The calculations correspond to a parametric study where arterial type and size, as well as plaque geometry and composition, are varied. These calculations are used to analyze the contribution of different factors affecting arterial wall temperature measurements. The main factors considered are the metabolic heat production of inflammatory cells, atherosclerotic plaque length l_p, inflammatory cell layer length l_mp, and inflammatory cell layer thickness d_mp. RESULTS: The calculations indicate that the best location to perform the temperature measurement is at the back region of the plaque (0.5 ≤ l/l_p ≤ 0.7). The location of the maximum temperature, or hot spot, at the plaque surface can move during the cardiac cycle depending on the arterial geometry and is a direct result of the blood flow pattern. For the bending artery, the hot spot moves 0.6 millimeters along the longitudinal direction; for the arterial bifurcation, the hot spot is concentrated at a single location due to the flow recirculation observed at both ends of the plaque.
Focusing on the thermal history of different points selected at the plaque surface, it is seen that during the cardiac cycle the temperature at a point located at l/l_p = 0.7 can vary between 0.1 and 0.5 degrees Celsius for the bending artery, while no significant variation is observed in the arterial bifurcation. Calculations performed for different values of inflammatory cell layer thickness d_mp indicate the same behavior reported experimentally, namely an increase in the maximum temperature observed, which for the bending artery ranges from 0.6 to 2.0 degrees Celsius for d_mp = 25 and 100 micrometers, respectively. CONCLUSION: The results indicate that direct temperature measurements should be taken (1) as close as possible to the plaque/lumen surface, as the calculations show a significant drop in temperature within 120 micrometers from the plaque surface; (2) in the presence of blood flow, at the downstream edge of the plaque, as it shows a higher temperature independently of the arterial geometry; and (3) at a sampling rate higher than the cardiac frequency, extending the measurement over several cardiac cycles, as variations of up to 0.7 degrees Celsius were observed at l/l_p = 0.7 for the bending artery
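The balance described above, between the metabolic heat produced by the inflammatory cell layer and conduction through the surrounding tissue, can be sketched with a simple 1-D transient conduction model. This is not the authors' 2-D COMSOL model; the geometry, the explicit finite-difference scheme, and all parameter values below are illustrative assumptions.

```python
# Illustrative 1-D transient heat conduction across the arterial wall, with a
# volumetric metabolic heat source confined to an inflamed layer. All numbers
# are assumed for illustration, not taken from the paper.

def wall_temperature(n=60, dx=5e-6, dt=5e-5, steps=20000,
                     k=0.5, rho=1050.0, cp=3600.0,
                     q_met=1e7, layer=(20, 40), t_blood=37.0):
    """Explicit FTCS scheme for dT/dt = alpha * d2T/dx2 + q/(rho*cp).
    Nodes with index in `layer` carry the metabolic source q_met (W/m^3)."""
    alpha = k / (rho * cp)
    # stability condition for the explicit scheme
    assert alpha * dt / dx**2 < 0.5, "unstable time step"
    T = [t_blood] * n
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            src = q_met if layer[0] <= i < layer[1] else 0.0
            Tn[i] = (T[i] + alpha * dt * (T[i+1] - 2*T[i] + T[i-1]) / dx**2
                     + dt * src / (rho * cp))
        # Dirichlet boundaries: lumen and adventitia held at blood temperature
        Tn[0] = Tn[-1] = t_blood
        T = Tn
    return T

T = wall_temperature()
print(f"max temperature rise over blood temperature: {max(T) - 37.0:.4f} C")
```

The hot spot appears at the center of the inflamed layer, and its magnitude scales with q_met and the square of the layer thickness, which is the qualitative trend the abstract reports for increasing d_mp.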

    Thermal study of vulnerable atherosclerotic plaque

    Atherosclerotic plaques with a high probability of rupture show the presence of a hot spot due to the accumulation of inflammatory cells. This study utilizes two- and three-dimensional (2-D and 3-D) arterial geometries containing an atherosclerotic plaque experiencing different levels of inflammation and uses heat transfer analysis to determine the temperature distribution in the plaque region. The 2-D studies consider three different vessel geometries: a stenotic straight artery, a bending artery, and an arterial bifurcation, which model a human aorta, a coronary artery, and a carotid bifurcation, respectively. The 3-D model considers a stenotic straight artery using realistic and simplified geometries. Three different blood flow cases are considered: steady state, transient state, and blood flow reduction. In the 3-D model, the thermal stress produced by local inflammation is estimated to determine the effect of inflammation on plaque stability. For the fluid flow and heat transfer analysis, the Navier-Stokes equations and the energy equation are solved; for the structural analysis, the governing equations are expressed in terms of the equilibrium equation, constitutive equation, and compatibility condition, which are solved using the multi-physics software COMSOL 3.3 (COMSOL, Inc.). Our results indicate that the best location to measure plaque temperature in the presence of blood flow is between the middle and the far edge of the plaque. Blood flow reduction leads to a non-uniform temperature increase ranging from 0.1 to 0.25 °C at the plaque/lumen interface. In the realistic 3-D model, multiple measuring points must be considered to decrease the potential error in temperature measurement, even within 1 or 2 mm of the plaque centerline region. The most highly thermally stressed regions, with a value of 1.45 Pa, are observed at the corners of the lipid core and the plaque/lumen interface.
The mathematical model developed provides a tool to analyze the factors affecting heat transfer at the plaque surface. The results may contribute to the understanding of the relationship between plaque temperature and the likelihood of rupture, and also provide a tool to better understand arterial wall temperature measurements obtained with novel catheters
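The fluid flow and heat transfer equations referred to in this abstract take the standard form below for incompressible flow; this is the textbook formulation, not a transcription from the paper, and the metabolic source term q̇_met is written here following the notation of the preceding abstract.

```latex
% Incompressible Navier--Stokes equations (blood flow in the lumen)
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
    + \mathbf{u} \cdot \nabla \mathbf{u} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{u}, \qquad
\nabla \cdot \mathbf{u} = 0

% Energy equation in the wall and plaque, with the inflammatory
% metabolic heat source \dot{q}_{\mathrm{met}} (zero in healthy tissue)
\rho c_{p} \left( \frac{\partial T}{\partial t}
    + \mathbf{u} \cdot \nabla T \right)
  = k \nabla^{2} T + \dot{q}_{\mathrm{met}}
```

In the solid wall the convective term u·∇T vanishes, so the energy equation reduces to transient conduction with the source term, which is what produces the hot spot at the inflamed region.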

    Prosodic strengthening on the /s/-stop cluster and the phonetic implementation of an allophonic rule in English

    This acoustic study investigates effects of boundary and prominence on the temporal structure of s#CV and #sCV in English, and on the phonetic implementation of the allophonic rule whereby a voiceless stop after /s/ becomes unaspirated. Results obtained with acoustic temporal measures for /sCV/ sequences showed that the segments at the source of prosodic strengthening (i.e., /s/ in #sCV for boundary marking and the nucleus vowel for prominence marking) were expanded in both absolute and relational terms, whereas other durational components distant from the source (e.g., stop closure duration in #sCV) showed temporal expansion only in the absolute measure. This suggests that speakers make an extra effort to expand the very first segment and the nucleus vowel more than the rest of the sequence in order to signal the pivotal loci of the boundary vs. the prominence information. The potentially ambiguous s#CV and #sCV sequences (e.g., ice#can vs. eye#scan) were never found to be neutralized even in the phrase-internal condition, cuing the underlying syllable structures with fine phonetic detail. Most crucially, an already short lag VOT in #sCV (due to the allophonic rule) was shortened further under prosodic strengthening, which was interpreted as enhancement of the phonetic feature {voiceless unaspirated}. It was proposed that prosodic strengthening makes crucial reference to the phonetic feature system of the language and operates on a phonetic feature, including the one derived by a language-specific allophonic rule. An alternative account was also discussed in gestural terms in the framework of Articulatory Phonology

    Phonetic encoding of coda voicing contrast under different focus conditions in L1 vs. L2 English

    This study investigated how coda voicing contrast in English would be phonetically encoded in the temporal vs. spectral dimension of the preceding vowel (in vowel duration vs. F1/F2) by Korean L2 speakers of English, and how their L2 phonetic encoding pattern would be compared to that of native English speakers. Crucially, these questions were explored by taking into account the phonetics-prosody interface, testing effects of prominence by comparing target segments in three focus conditions (phonological focus, lexical focus, and no focus). Results showed that Korean speakers utilized the temporal dimension (vowel duration) to encode coda voicing contrast, but failed to use the spectral dimension (F1/F2), reflecting their native language experience—i.e., with a more sparsely populated vowel space in Korean, they are less sensitive to small changes in the spectral dimension, and hence fine-grained spectral cues in English are not readily accessible. Results also showed that along the temporal dimension, both the L1 and L2 speakers hyperarticulated coda voicing contrast under prominence (when phonologically or lexically focused), but hypoarticulated it in the non-prominent condition. This indicates that low-level phonetic realization and high-order information structure interact in a communicatively efficient way, regardless of the speakers’ native language background. The Korean speakers, however, used the temporal phonetic space differently from the way the native speakers did, especially showing less reduction in the no focus condition. This was also attributable to their native language experience—i.e., the Korean speakers’ use of temporal dimension is constrained in a way that is not detrimental to the preservation of coda voicing contrast, given that they failed to add additional cues along the spectral dimension. 
The results imply that the L2 phonetic system can be more fully illuminated through an investigation of the phonetics-prosody interface in connection with the L2 speakers’ native language experience

    BioHackathon series in 2011 and 2012: penetration of ontology and linked data in life science domains

    The application of semantic technologies to the integration of biological data and the interoperability of bioinformatics analysis and visualization tools has been the common theme of a series of annual BioHackathons hosted in Japan for the past five years. Here we provide a review of the activities and outcomes from the BioHackathons held in 2011 in Kyoto and 2012 in Toyama. In order to efficiently implement semantic technologies in the life sciences, participants formed various sub-groups and worked on the following topics: Resource Description Framework (RDF) models for specific domains, text mining of the literature, ontology development, essential metadata for biological databases, platforms to enable efficient Semantic Web technology development and interoperability, and the development of applications for Semantic Web data. In this review, we briefly introduce the themes covered by these sub-groups. The observations made, conclusions drawn, and software development projects that emerged from these activities are discussed
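The Resource Description Framework mentioned above expresses data as subject-predicate-object triples. A minimal sketch, using only the standard library, of serializing a couple of biological facts as N-Triples; the `example.org` resource URIs are hypothetical and not drawn from any BioHackathon output (only the `rdfs:label` predicate URI is the real W3C one).

```python
# Minimal illustration of the RDF triple model: serialize facts as N-Triples.
# The example.org URIs below are hypothetical placeholders.

def ntriple(s, p, o):
    """Format one N-Triples line; URIs get angle brackets, literals quotes."""
    obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
    return f"<{s}> <{p}> {obj} ."

EX = "http://example.org/"  # hypothetical namespace
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"

triples = [
    ntriple(EX + "gene/BRCA1", RDFS_LABEL, "BRCA1"),
    ntriple(EX + "gene/BRCA1", EX + "locatedOn", EX + "chromosome/17"),
]
print("\n".join(triples))
```

Data in this shape is what makes the cross-database integration discussed in the review possible: any two datasets that share URIs can be merged by simple graph union and queried together with SPARQL.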

    Evaluation of Mixing Downstream of Tees in Duct Systems with Respect to Single Point Representative Air Sampling

    Air duct systems in nuclear facilities must meet the requirements of ANSI N13.1-1999 and the Environmental Protection Agency (EPA) that the exhaust airflow be monitored with continuous sampling in case of an accidental release of airborne radionuclides. Continuous air sampling in a duct system is based on the concept of single-point representative sampling at a location where the velocity and contaminant profiles are nearly uniform. Sampling must be at a location where mixing produces a uniform distribution, in accordance with ANSI N13.1-1999. The purpose of this work is to identify sampling locations where the velocity, momentum, and contaminant concentrations fall below the 20% coefficient of variation (COV) requirement of ANSI N13.1-1999. Four sets of experiments were conducted on a generic 'T' mixing system. Measurements were made of the velocity, tracer gas concentration, ten-micrometer particle concentration, and average flow swirl angle. The generic 'T' mixing system included three different sub duct sizes (6"x6", 9"x9", and 12"x12"), one main duct size (12"x12"), and five air velocities (0, 100, 200, 300, and 400 fpm). An air blender was also introduced in some of the tests to promote mixing of the air streams from the main duct and sub duct. The experimental results suggested that turbulent mixing provided acceptable velocity COVs by 6 hydraulic diameters downstream. For similar velocities in the main duct and sub duct, an air blender provided a substantial improvement, achieving COVs below 10% within 3 hydraulic diameters.
Without an air blender, the distance downstream of the T-junction needed for COVs below 20% increased as the velocity of the sub duct airflow increased. About 95% of the cases achieved COVs below 10%. With the air blender, most cases had lower COVs than without it. However, at an area ratio (sub duct area / main duct area) of 0.25 and above a velocity ratio (sub duct velocity / main duct velocity) of 3, the air blender proved to be less beneficial for mixing. These results can apply to other duct systems with similar geometries and, ultimately, serve as a basis for selecting a proper sampling location under the requirements of single-point representative sampling
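The acceptance criterion used throughout this study is a simple statistic: the coefficient of variation (standard deviation over mean) of readings taken on a traverse grid across the duct cross-section, which ANSI N13.1-1999 requires to be below 20%. A sketch of that check, with a made-up velocity traverse:

```python
# Coefficient-of-variation check as used for the ANSI N13.1-1999 uniformity
# criterion. The velocity readings below are invented sample data.

import statistics

def cov_percent(samples):
    """COV = population standard deviation / mean, as a percentage."""
    mean = statistics.mean(samples)
    return 100.0 * statistics.pstdev(samples) / mean

# hypothetical velocity traverse (fpm) at 6 hydraulic diameters downstream
velocities = [392, 405, 398, 410, 388, 401, 395, 407, 399]

cov = cov_percent(velocities)
print(f"velocity COV = {cov:.1f}%  ->  {'PASS' if cov < 20.0 else 'FAIL'}")
```

The same calculation is applied to the tracer gas and particle concentration grids; a sampling location qualifies only when all three profiles meet the COV limit.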

    An expert technique for optimization of underground mine support system


    Local Scheduling in KubeEdge-Based Edge Computing Environment

    KubeEdge is an open-source platform that orchestrates containerized Internet of Things (IoT) application services in IoT edge computing environments. Based on Kubernetes, it supports heterogeneous IoT device protocols on edge nodes and provides various functions necessary to build edge computing infrastructure, such as network management between cloud and edge nodes. However, the resulting cloud-based systems are subject to several limitations. In this study, we evaluated the performance of KubeEdge in terms of the computational resource distribution and delay between edge nodes. We found that forwarding traffic between edge nodes degrades the throughput of clusters and causes service delay in edge computing environments. Based on these results, we proposed a local scheduling scheme that handles user traffic locally at each edge node. The performance evaluation results revealed that local scheduling outperforms the existing load-balancing algorithm in the edge computing environment
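The core of the local-scheduling idea above, serving requests on the edge node where they arrive and paying the inter-node forwarding penalty only on overflow, can be modeled in a few lines. This is a simplified toy model with assumed latencies, not the KubeEdge scheduler or its API.

```python
# Toy latency model of local-first scheduling across edge nodes.
# LOCAL_MS and FORWARD_MS are assumed per-request latencies, not measurements.

LOCAL_MS, FORWARD_MS = 5, 40

def total_latency(requests, capacity):
    """requests: {node: request count}; capacity: {node: local capacity}.
    Each request is served locally when capacity allows; the remainder is
    forwarded to another node at the higher inter-node cost."""
    total = 0
    overflow = 0
    for node, n in requests.items():
        local = min(n, capacity.get(node, 0))
        overflow += n - local
        total += local * LOCAL_MS
    total += overflow * FORWARD_MS  # forwarded traffic pays the network cost
    return total

balanced = total_latency({"edge-a": 8, "edge-b": 2}, {"edge-a": 10, "edge-b": 10})
saturated = total_latency({"edge-a": 12}, {"edge-a": 10})
print(balanced, saturated)
```

Even this crude model shows why forwarding dominates cluster throughput: a load balancer that spreads traffic evenly pays FORWARD_MS on every cross-node request, while local-first scheduling pays it only for the overflow.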

    Distributed Node Scheduling with Adjustable Weight Factor for Ad-hoc Networks

    In this paper, a novel distributed scheduling scheme for ad-hoc networks is proposed. Specifically, the throughput and the delay of packets with different importance are flexibly adjusted by quantifying the importance as weight factors. In this scheme, each node is equipped with two queues, one for packets with high importance and the other for packets with low importance. The proposed scheduling scheme consists of two procedures: intra-node slot reallocation and inter-node reallocation. In the intra-node slot reallocation, self-fairness is adopted as a key metric, which is a composite of the quantified weight factors and traffic loads. This intra-node slot reallocation improves the throughput and the delay performance. Subsequently, through an inter-node reallocation algorithm adopted from LocalVoting (slot exchange among queues having the same importance), the fairness of traffic with the same importance is enhanced. Thorough simulations were conducted under various traffic load and weight factor settings. The simulation results show that the proposed algorithm can adjust packet delivery performance according to a predefined weight factor. Moreover, compared with conventional algorithms, the proposed algorithm achieves better throughput and delay performance. The low average delay attained alongside high throughput confirms the strong performance of the proposed algorithm
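The intra-node step described above can be sketched as a proportional split of a node's TDMA frame between its two queues. The score below (weight times load) is a stand-in for the paper's composite "self-fairness" metric, whose exact formula is not given in the abstract, so treat this as an assumed simplification.

```python
# Sketch of intra-node slot reallocation: divide a node's frame between its
# high- and low-importance queues in proportion to weight * load. The scoring
# formula is an assumed stand-in for the paper's self-fairness metric.

def allocate_slots(frame_slots, weights, loads):
    """Split frame_slots among queues proportionally to weight * load."""
    scores = [w * l for w, l in zip(weights, loads)]
    total = sum(scores) or 1  # avoid division by zero on an idle node
    slots = [round(frame_slots * s / total) for s in scores]
    slots[0] += frame_slots - sum(slots)  # absorb rounding into queue 0
    return slots

# high-importance queue: weight 3, 40 queued packets;
# low-importance queue:  weight 1, 40 queued packets
print(allocate_slots(20, weights=[3, 1], loads=[40, 40]))
```

Raising a queue's weight shifts slots toward it even at equal load, which is exactly the tunable throughput/delay trade-off the abstract describes; the inter-node LocalVoting step then evens out slots between queues of the same importance on different nodes.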